We propose a data-driven mean-curvature solver for the level-set method. This work is the natural extension to $\mathbb{R}^3$ of our two-dimensional strategy in [arXiv:2201.12342] [1] and [DOI:10.1016/j.jcp.2022.111291] [2]. However, in contrast to [1, 2], which built resolution-dependent neural-network dictionaries, here we develop a pair of models in $\mathbb{R}^3$, regardless of the mesh size. Our feedforward networks ingest preprocessed level-set, gradient, and curvature data and turn them into corrected numerical mean-curvature approximations at interface nodes. To reduce the problem's complexity, we use the Gaussian curvature to classify stencils and fit our models separately to non-saddle and saddle patterns. Non-saddle stencils are easier to handle because they exhibit curvature-error distributions characterized by monotonicity and symmetry. While the latter allowed us to train only on half of the mean-curvature spectrum, the former helped us blend the data-driven estimations with the baseline ones seamlessly near flat regions. The saddle-pattern error structure, on the other hand, is less clear; hence, we exploited no latent information beyond what is known. In this regard, we trained our models on not only spherical but also sinusoidal and hyperbolic-paraboloidal patches. Our approach to building their data sets is systematic but collects samples randomly while ensuring well-balancedness. We also resorted to standardization and dimensionality reduction as preprocessing steps and integrated regularization to minimize outliers. In addition, we exploit curvature rotation/reflection invariance to improve precision at inference time. Several experiments confirm that our proposed system can yield more accurate mean-curvature estimations than modern particle-based interface reconstruction and level-set schemes.
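The abstract contains no code, but a minimal sketch of the inference flow it describes might look as follows, assuming a 3×3×3 stencil of level-set values and two hypothetical trained regressors (`nonsaddle_model`, `saddle_model`); the Gaussian-curvature sign selects the model, reflected copies of the stencil are averaged to exploit rotation/reflection invariance, and the numerical baseline is kept near flat regions. All names and thresholds below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical stand-ins for the two trained feedforward networks mentioned
# in the abstract (non-saddle and saddle patterns); here they are dummy
# callables mapping a flattened stencil to a dimensionless mean curvature.
def nonsaddle_model(features):   # placeholder regressor
    return float(np.tanh(features.mean()))

def saddle_model(features):      # placeholder regressor
    return float(np.tanh(features.mean()))

def mean_curvature_at_node(stencil, gaussian_curvature, baseline_hk):
    """Route a preprocessed 3x3x3 level-set stencil to the appropriate model.

    stencil            : (3, 3, 3) array of level-set values around the node
    gaussian_curvature : numerically estimated Gaussian curvature (its sign
                         separates non-saddle from saddle patterns)
    baseline_hk        : baseline numerical dimensionless mean curvature
    """
    model = nonsaddle_model if gaussian_curvature >= 0.0 else saddle_model

    # Exploit reflection invariance at inference time: average predictions
    # over a few symmetry-equivalent versions of the stencil.
    reflections = [stencil,
                   stencil[::-1, :, :],
                   stencil[:, ::-1, :],
                   stencil[:, :, ::-1]]
    hk_ml = float(np.mean([model(r.ravel()) for r in reflections]))

    # Near flat regions, fall back to the numerical baseline, as the abstract
    # suggests for the monotone non-saddle error profile (threshold assumed).
    if abs(baseline_hk) < 1e-3:
        return baseline_hk
    return hk_ml

# Toy usage: a sphere-like stencil of signed-distance values.
x = np.linspace(-1.0, 1.0, 3)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
phi = np.sqrt((X - 0.3) ** 2 + Y ** 2 + Z ** 2) - 0.5
print(mean_curvature_at_node(phi, gaussian_curvature=0.2, baseline_hk=0.1))
```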
We present an error-neural-modeling-based strategy for approximating two-dimensional curvature in the level-set method. Our main contribution is a redesigned hybrid solver [Larios-Cárdenas and Gibou, J. Comput. Phys. (May 2022), 10.1016/j.jcp.2022.111291] that relies on numerical schemes to enable machine-learning operations on demand. In particular, our routine features double predicting to harness curvature symmetry invariance in favor of precision and stability. The core of this solver is a multilayer perceptron trained on circular- and sinusoidal-interface samples. Its role is to quantify the error in numerical curvature approximations and to emit corrected estimates along the free boundary. These corrections arise in response to preprocessed context level-set, curvature, and gradient data. To promote neural capacity, we employ sample negative-curvature normalization, reorientation, and reflection-based augmentation. In the same manner, our system incorporates dimensionality reduction, well-balancedness, and regularization to minimize outlying effects. Our training approach is likewise scalable across mesh sizes. To this end, we introduce dimensionless parametrization and probabilistic subsampling during data production. Together, all these elements improve the accuracy and efficiency of curvature computations around under-resolved regions. In most experiments, our strategy outperforms the numerical baseline at twice the number of redistancing steps while requiring only a fraction of the cost.
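A rough sketch of the error-correcting idea follows, under the assumption that a trained MLP (here a hypothetical `error_mlp`) predicts the error in the numerical dimensionless curvature and that "double predicting" means averaging the correction computed on a sample and on its reflected counterpart after negative-curvature normalization. Everything named below is an assumption for illustration only.

```python
import numpy as np

def error_mlp(features):
    """Hypothetical stand-in for the trained error-quantifying MLP; it would
    return the estimated error in the numerical dimensionless curvature given
    preprocessed level-set, curvature, and gradient data."""
    return 0.05 * float(np.tanh(features.sum()))

def corrected_curvature(features, hk_numerical):
    """Double prediction: evaluate the correction on the sample and on a
    reflected counterpart, then average, exploiting curvature symmetry."""
    # Negative-curvature normalization: work on the half-spectrum hk <= 0
    # and restore the sign afterwards (an assumed preprocessing convention).
    sign = -1.0 if hk_numerical > 0.0 else 1.0
    hk = sign * hk_numerical
    feats = sign * features

    reflected = feats[::-1]                 # stand-in for the reflected sample
    eps = 0.5 * (error_mlp(feats) + error_mlp(reflected))

    return sign * (hk - eps)                # subtract the predicted error

# Toy usage with a made-up feature vector and numerical estimate.
features = np.array([0.1, -0.2, 0.05, 0.3])
print(corrected_curvature(features, hk_numerical=0.42))
```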
We present a machine-learning framework that blends image super-resolution technologies with passive scalar transport in the level-set method. Here, we investigate whether data-driven corrections can be computed directly to minimize numerical viscosity in the coarse-mesh evolution of an interface. The starting point of the proposed system is a semi-Lagrangian formulation. And, to reduce numerical dissipation, we introduce a multilayer perceptron whose role is to improve the numerically estimated surface trajectory. To do so, it processes localized level-set, velocity, and positional data in a single time frame for select vertices near the moving front. Our main contribution is therefore a novel machine-learning-augmented transport algorithm that operates alongside selective redistancing and alternates with conventional advection to keep the adjusted interface trajectory smooth. Consequently, our procedure is more efficient than full-scan convolutional applications because it concentrates computational effort only around the free boundary. Likewise, we show through various tests that our strategy effectively counteracts numerical diffusion and mass loss. In simple advection problems, for example, our method can attain the same accuracy as the baseline scheme at twice the resolution but at a fraction of the cost. Similarly, our hybrid technique can produce feasible solidification fronts for crystallization processes. On the other hand, tangential shear flows and highly deforming simulations can lead to bias artifacts and deteriorated inference. Likewise, stringent design velocity constraints can limit the applicability of our solver in problems involving rapid interface changes. In the latter cases, we have identified several opportunities to enhance robustness without forgoing the core concepts of our approach.
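As an illustration of how such a hybrid step could be organized, the sketch below alternates a plain first-order semi-Lagrangian update with a step in which a hypothetical `correction_mlp` adjusts the advected level-set values only at vertices near the front; the network body is a placeholder and the alternation schedule is an assumption, not the paper's algorithm.

```python
import numpy as np

def semi_lagrangian_step(phi, u, v, dt, dx):
    """First-order semi-Lagrangian advection of phi on a uniform 2D grid
    (bilinear interpolation at the backtracked departure points)."""
    ny, nx = phi.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    xd = np.clip(i - u * dt / dx, 0, nx - 1)   # departure points (index units)
    yd = np.clip(j - v * dt / dx, 0, ny - 1)
    i0, j0 = np.floor(xd).astype(int), np.floor(yd).astype(int)
    i1, j1 = np.minimum(i0 + 1, nx - 1), np.minimum(j0 + 1, ny - 1)
    fx, fy = xd - i0, yd - j0
    return ((1 - fy) * ((1 - fx) * phi[j0, i0] + fx * phi[j0, i1])
            + fy * ((1 - fx) * phi[j1, i0] + fx * phi[j1, i1]))

def correction_mlp(local_phi, local_vel):
    """Hypothetical stand-in for the trained network: returns a small
    data-driven adjustment to the advected level-set value at one vertex."""
    return 0.0  # a real model would counteract numerical dissipation here

def hybrid_advect(phi, u, v, dt, dx, step):
    phi_new = semi_lagrangian_step(phi, u, v, dt, dx)
    if step % 2 == 0:                      # alternate with conventional steps
        near_front = np.abs(phi_new) < 2.0 * dx
        for j, i in zip(*np.nonzero(near_front)):
            patch = phi_new[max(j - 1, 0):j + 2, max(i - 1, 0):i + 2]
            phi_new[j, i] += correction_mlp(patch, (u, v))
    return phi_new

# Toy usage: advect a circular interface in a constant velocity field.
n, dx = 64, 1.0 / 64
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.2
phi = hybrid_advect(phi, u=1.0, v=0.0, dt=0.5 * dx, dx=dx, step=0)
```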
We present a novel hybrid strategy based on machine learning to improve curvature estimation in the level-set method. The proposed inference system couples enhanced neural networks with standard numerical schemes to compute curvature more accurately. The core of our hybrid framework is a switching mechanism that relies on well-established numerical techniques to gauge curvature. If the curvature magnitude is larger than a resolution-dependent threshold, a neural network is used instead to yield a better approximation. Our networks are multilayer perceptrons fitted to synthetic data sets composed of sinusoidal- and circular-interface samples at various configurations. To reduce data set size and training complexity, we exploit the problem's characteristic symmetry and build our models on only half of the curvature spectrum. These savings lead to a powerful inference system able to outperform any of its numerical or neural components alone. Experiments with stationary, smooth interfaces show that our hybrid solver is notably superior to conventional numerical methods in coarse grids and along steep interface regions. Compared to prior research, we have observed gains in precision after training the regression model with data pairs from more than one interface type and transforming the data with specialized input preprocessing. In particular, our findings confirm that machine learning is a promising venue for reducing or removing mass loss in the level-set method.
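The switching mechanism can be pictured with a short sketch: compute the standard finite-difference curvature first and, only when the dimensionless magnitude exceeds a resolution-dependent threshold, defer to the neural model. The `neural_curvature` body and the `threshold_factor` value below are placeholders, not the paper's choices.

```python
import numpy as np

def numerical_curvature(phi, i, j, dx):
    """Standard second-order finite-difference curvature of the level-set
    function at node (i, j):  kappa = div( grad(phi) / |grad(phi)| )."""
    px = (phi[i + 1, j] - phi[i - 1, j]) / (2 * dx)
    py = (phi[i, j + 1] - phi[i, j - 1]) / (2 * dx)
    pxx = (phi[i + 1, j] - 2 * phi[i, j] + phi[i - 1, j]) / dx**2
    pyy = (phi[i, j + 1] - 2 * phi[i, j] + phi[i, j - 1]) / dx**2
    pxy = (phi[i + 1, j + 1] - phi[i + 1, j - 1]
           - phi[i - 1, j + 1] + phi[i - 1, j - 1]) / (4 * dx**2)
    denom = (px**2 + py**2) ** 1.5 + 1e-12
    return (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / denom

def neural_curvature(stencil_values):
    """Hypothetical stand-in for the trained multilayer perceptron; it would
    map a 3x3 stencil of level-set values to a dimensionless curvature."""
    return float(np.mean(stencil_values))   # placeholder only

def hybrid_curvature(phi, i, j, dx, threshold_factor=0.1):
    """Switching mechanism: trust the numerical estimate for low curvature,
    defer to the neural model when |h*kappa| exceeds a resolution-dependent
    threshold (threshold_factor is an assumed tunable constant)."""
    hk = dx * numerical_curvature(phi, i, j, dx)
    if abs(hk) <= threshold_factor:
        return hk / dx
    return neural_curvature(phi[i - 1:i + 2, j - 1:j + 2]) / dx

# Toy usage on a circular interface of radius 0.25 (expected kappa ~ 4).
n, dx = 128, 1.0 / 128
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.25
print(hybrid_curvature(phi, 96, 64, dx))
```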
We propose a deep-learning strategy to estimate the mean curvature of two-dimensional implicit interfaces in the level-set method. Our approach is based on fitting feedforward neural networks to synthetic data sets constructed from circular interfaces immersed in uniform grids of various resolutions. These multilayer perceptrons process the level-set values from mesh points next to the free boundary and output the dimensionless curvature at their closest locations on the interface. Accuracy analyses involving irregular interfaces, in both uniform and adaptive grids, show that our models are competitive with traditional numerical schemes in the $L^1$ and $L^2$ norms. In particular, our neural networks approximate curvature with comparable precision in coarse resolutions, when the interface exhibits steep curvature regions, and when the number of iterations used to reinitialize the level-set function is limited. Although traditional numerical methods are more robust than our framework, our results reveal the potential of machine learning for handling computational tasks where the level-set method is known to experience difficulties. We also establish that neural networks tailored to the application's local resolution can be designed to estimate mean curvature more efficiently than general-purpose ones.
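A small sketch of the kind of fit described, assuming scikit-learn's `MLPRegressor` (the paper's own architecture and tooling may differ): 3×3 stencils of signed-distance values sampled from random circles are mapped to the dimensionless curvature h/r of the closest point on the interface.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
h = 1.0 / 64                                   # grid spacing

def make_samples(n):
    """Synthetic training pairs: 3x3 stencils of signed-distance values from
    circles of random radius; target is the dimensionless curvature h/r."""
    X, y = [], []
    offs = np.array([-h, 0.0, h])
    gx, gy = np.meshgrid(offs, offs, indexing="ij")
    for _ in range(n):
        r = rng.uniform(4 * h, 32 * h)                    # circle radius
        cx, cy = rng.uniform(-0.5 * h, 0.5 * h, size=2)   # node-to-interface jitter
        phi = np.sqrt((gx - cx - r) ** 2 + (gy - cy) ** 2) - r
        X.append(phi.ravel() / h)                         # normalize inputs by h
        y.append(h / r)                                   # dimensionless curvature
    return np.array(X), np.array(y)

X_train, y_train = make_samples(5000)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

X_test, y_test = make_samples(1000)
print("mean |error| in h*kappa:", np.abs(model.predict(X_test) - y_test).mean())
```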
Ithaca is a Fuzzy Logic (FL) plugin for developing artificial intelligence systems within the Unity game engine. Its goal is to provide an intuitive and natural way to build advanced artificial intelligence systems, making the implementation of such systems faster and more affordable. The software is made up of a C\# framework and an Application Programming Interface (API) for writing inference systems, as well as a set of tools for graphical development and debugging. Additionally, a Fuzzy Control Language (FCL) parser is provided in order to import systems previously defined using this standard.
Data deprivation, or the lack of easily available and actionable information on the well-being of individuals, is a significant challenge for the developing world and an impediment to the design and operationalization of policies intended to alleviate poverty. In this paper we explore the suitability of data derived from OpenStreetMap to proxy for the location of two crucial public services: schools and health clinics. Thanks to the efforts of thousands of digital humanitarians, online mapping repositories such as OpenStreetMap contain millions of records on buildings and other structures, delineating both their location and often their use. Unfortunately, much of this data is locked in complex, unstructured text, rendering it seemingly unsuitable for classifying schools or clinics. We apply a scalable, unsupervised learning method to unlabeled OpenStreetMap building data to extract the locations of schools and health clinics in ten countries in Africa. We find that the topic modeling approach greatly improves performance versus reliance on structured keys alone. We validate our results by comparing the schools and clinics identified by our OSM method against those identified by the WHO, and we describe OSM coverage gaps more broadly.
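A minimal sketch of such an unsupervised pipeline, assuming the OSM tags for each building have already been flattened into one text document per record and using scikit-learn's LDA topic model (the paper's exact method is not specified in the abstract, so this is illustrative only):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for flattened OpenStreetMap tag text, one document per building.
docs = [
    "amenity school name primary school operator ministry of education",
    "building yes amenity clinic healthcare centre name health post",
    "building residential addr street main road",
    "amenity hospital healthcare hospital emergency yes",
    "building yes shop supermarket name market",
    "amenity school education secondary name girls secondary school",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)            # per-document topic mixtures

# Inspect the top words of each topic; in practice one would label the topics
# dominated by education- or health-related terms and use each building's
# topic mixture to flag candidate schools and clinics.
terms = vectorizer.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[::-1][:5]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```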
In this paper, we present an evolved version of the Situational Graphs, which jointly models in a single optimizable factor graph (i) a SLAM graph, as a set of robot keyframes containing their associated measurements and robot poses, and (ii) a 3D scene graph, as a high-level representation of the environment that encodes its different geometric elements with semantic attributes and the relational information between those elements. Our proposed S-Graphs+ is a novel four-layered factor graph that includes: (1) a keyframes layer with robot pose estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer encompassing sets of wall planes, and (4) a floors layer gathering the rooms within a given floor level. The above graph is optimized in real time to obtain a robust and accurate estimate of the robot's pose and its map, simultaneously constructing and leveraging the high-level information of the environment. To extract such high-level information, we present novel room and floor segmentation algorithms that utilize the mapped wall planes and free-space clusters. We tested S-Graphs+ on multiple datasets, including simulations of distinct indoor environments, real datasets captured over several construction sites and office environments, and a real public dataset of indoor office environments. S-Graphs+ outperforms relevant baselines in the majority of the datasets while extending the robot's situational awareness through a four-layered scene model. Moreover, we make the algorithm available as a Docker file.
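Purely as an illustration of the layering described above, a toy container for the four layers could look like the following; the real S-Graphs+ encodes these entities as nodes and factors of an optimizable factor graph rather than plain records, so this is only a structural sketch with assumed field names.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Keyframe:            # layer 1: robot pose estimates
    pose: List[float]      # e.g. [x, y, z, roll, pitch, yaw]

@dataclass
class Wall:                # layer 2: wall surfaces (plane parameters)
    plane: List[float]     # e.g. [nx, ny, nz, d]

@dataclass
class Room:                # layer 3: a set of wall planes
    walls: List[Wall] = field(default_factory=list)

@dataclass
class Floor:               # layer 4: the rooms within one floor level
    rooms: List[Room] = field(default_factory=list)

@dataclass
class SituationalGraph:
    keyframes: List[Keyframe] = field(default_factory=list)
    floors: List[Floor] = field(default_factory=list)

# Toy usage: one floor with one room bounded by four walls.
walls = [Wall([1, 0, 0, -2]), Wall([-1, 0, 0, -2]),
         Wall([0, 1, 0, -3]), Wall([0, -1, 0, -3])]
graph = SituationalGraph(keyframes=[Keyframe([0, 0, 0, 0, 0, 0])],
                         floors=[Floor(rooms=[Room(walls=walls)])])
print(len(graph.floors[0].rooms[0].walls), "walls in the mapped room")
```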
Algorithms that involve both forecasting and optimization are at the core of solutions to many difficult real-world problems, such as supply chains (inventory optimization), traffic, and the transition towards carbon-free energy generation in battery/load/production scheduling in sustainable energy systems. Typically, in these scenarios we want to solve an optimization problem that depends on unknown future values, which therefore need to be forecast. As both forecasting and optimization are difficult problems in their own right, relatively little research has been done in this area. This paper presents the findings of the "IEEE-CIS Technical Challenge on Predict+Optimize for Renewable Energy Scheduling", held in 2021. We present a comparison and evaluation of the seven highest-ranked solutions in the competition, to provide researchers with a benchmark problem and to establish the state of the art for this benchmark, with the aim of fostering and facilitating research in this area. The competition used data from the Monash Microgrid, as well as weather data and energy market data. It focused on two main challenges: forecasting renewable energy production and demand, and obtaining an optimal schedule for the activities (lectures) and on-site batteries that leads to the lowest cost of energy. The most accurate forecasts were obtained by gradient-boosted tree and random forest models, and optimization was mostly performed using mixed integer linear and quadratic programming. The winning method predicted different scenarios and optimized over all scenarios jointly using a sample average approximation method.
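The winning method's sample-average-approximation step can be sketched as follows, assuming made-up demand and price scenarios and the cvxpy modeling library (the actual entries relied on mixed-integer linear and quadratic programming solvers); one shared battery schedule is chosen to minimize the mean cost across all forecast scenarios.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
T, S = 24, 20                       # hourly periods and forecast scenarios
capacity, power = 100.0, 25.0       # battery energy (kWh) and power (kW) limits

# Made-up forecast scenarios; in the competition these came from the
# gradient-boosted tree / random forest forecasting models.
price = rng.uniform(20.0, 120.0, size=(S, T)) / 1000.0   # $/kWh
demand = rng.uniform(30.0, 80.0, size=(S, T))            # kW building load

charge = cp.Variable(T, nonneg=True)      # one schedule shared by all scenarios
discharge = cp.Variable(T, nonneg=True)
soc = cp.Variable(T + 1)                  # battery state of charge (kWh)

constraints = [soc[0] == 0.5 * capacity, soc >= 0, soc <= capacity,
               charge <= power, discharge <= power]
for t in range(T):
    constraints.append(soc[t + 1] == soc[t] + charge[t] - discharge[t])

# Sample average approximation: minimize the mean grid-energy cost over
# all scenarios for a single candidate schedule.
net = charge - discharge
scenario_costs = [cp.sum(cp.multiply(price[s], cp.pos(demand[s] + net)))
                  for s in range(S)]
problem = cp.Problem(cp.Minimize(sum(scenario_costs) / S), constraints)
problem.solve()
print("expected cost ($):", round(problem.value, 2))
```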
The study aims to develop a wearable device to combat the onslaught of COVID-19, to enhance the regular face shield available in the market, and to raise awareness of the health and safety protocols initiated by the government and its affiliates by enforcing social distancing with the integration of computer-vision algorithms. The wearable device was composed of various hardware and software components: a transparent polycarbonate face shield, a microprocessor, sensors, a camera, a thin-film-transistor on-screen display, jumper wires, a power bank, and the Python programming language. The algorithm incorporated in the study was object detection under computer-vision machine learning. The front camera, with OpenCV, determines the distance of a person in front of the user. Utilizing TensorFlow, the system identifies and detects the target object in an image or live feed and obtains its bounding boxes. Determining the distance from the camera to the target object requires the lens's focal length. To get the focal length, multiply the pixel width by the known distance and divide it by the known width (Rosebrock, 2020). The deployment of unit testing ensures that the parameters are valid in terms of design and specifications.
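The calibration and distance computation described above (triangle similarity, following Rosebrock) can be written out directly; the numbers below are illustrative, not measurements from the study.

```python
def focal_length_px(pixel_width, known_distance, known_width):
    """Calibration step: F = (P x D) / W, where P is the object's width in
    pixels at a known distance D and known real-world width W."""
    return (pixel_width * known_distance) / known_width

def distance_to_object(focal_length, known_width, pixel_width):
    """Inference step: invert the same similar-triangles relation to get the
    camera-to-object distance D' = (W x F) / P from a new bounding-box width."""
    return (known_width * focal_length) / pixel_width

# Toy numbers (not from the study): a person of shoulder width 0.45 m appears
# 300 px wide at a calibrated distance of 2.0 m.
F = focal_length_px(pixel_width=300, known_distance=2.0, known_width=0.45)
# A detection whose bounding box is 450 px wide is then roughly 1.33 m away,
# which could trigger a social-distancing alert on the on-screen display.
print(distance_to_object(F, known_width=0.45, pixel_width=450))
```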